Search

Your search for author "Zhou, Denny" returned 110 results.

Search Results

1. Best Practices and Lessons Learned on Synthetic Data for Language Models

2. Chain of Thought Empowers Transformers to Solve Inherently Serial Problems

3. Chain-of-Thought Reasoning Without Prompting

4. Transformers Can Achieve Length Generalization But Not Robustly

5. Premise Order Matters in Reasoning with Large Language Models

6. Self-Discover: Large Language Models Self-Compose Reasoning Structures

7. Gemini: A Family of Highly Capable Multimodal Models

8. Instruction-Following Evaluation for Large Language Models

9. Large Language Models can Learn Rules

10. Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models

11. FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation

12. Large Language Models Cannot Self-Correct Reasoning Yet

13. Large Language Models as Analogical Reasoners

14. Large Language Models as Optimizers

15. Simple synthetic data reduces sycophancy in large language models

16. Large Language Models as Tool Makers

17. Training Socially Aligned Language Models on Simulated Social Interactions

18. Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models

19. A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity

20. Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization

21. PaLM 2 Technical Report

22. Symbol tuning improves in-context learning in language models

23. Teaching Large Language Models to Self-Debug

24. Larger language models do in-context learning differently

25. Large Language Models Can Be Easily Distracted by Irrelevant Context

26. The Flan Collection: Designing Data and Methods for Effective Instruction Tuning

28. What learning algorithm is in-context learning? Investigations with linear models

29. TEMPERA: Test-Time Prompting via Reinforcement Learning

30. Scaling Instruction-Finetuned Language Models

31. Transcending Scaling Laws with 0.1% Extra Compute

32. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

33. Mind's Eye: Grounded Language Model Reasoning through Simulation

34. Language Models are Multilingual Chain-of-Thought Reasoners

35. Recitation-Augmented Language Models

36. Compositional Semantic Parsing with Large Language Models

37. Rationale-Augmented Ensembles in Language Models

38. Emergent Abilities of Large Language Models

39. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

40. UL2: Unifying Language Learning Paradigms

41. PaLM: Scaling Language Modeling with Pathways

42. Token Dropping for Efficient BERT Pretraining

43. Self-Consistency Improves Chain of Thought Reasoning in Language Models

44. DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection

45. Auto-scaling Vision Transformers without Training

46. Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance

47. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

48. A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation

49. SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive Knowledge Graphs

50. Speeding up Deep Model Training by Sharing Weights and Then Unsharing
